Self-play is a common paradigm for constructing solutions in Markov games that can yield optimal policies in collaborative settings. However, these policies often adopt highly specialized conventions that make playing with a novel partner difficult. To address this, recent approaches rely on encoding symmetry and convention awareness into policy training, but these require strong environmental assumptions and complicate policy training. We therefore propose moving the learning of conventions into the belief space. Specifically, we propose a belief learning model that can maintain beliefs over rollouts of policies not observed at training time, and can thus decode and adapt to novel conventions at test time. We show how this model can be leveraged for both search and training of a best response over a diverse pool of policies, which greatly improves ad-hoc teamplay. We also show how our setup promotes explainability and interpretability of nuanced agent conventions.
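To make the idea concrete, below is a minimal sketch of a belief model of this kind, assuming a recurrent encoder that maps an observed partner trajectory to a posterior over a discrete set of candidate conventions; the module names, dimensions, and the discrete-convention assumption are illustrative and not taken from the paper.

```python
# Hypothetical sketch: a recurrent belief model over partner conventions.
# It consumes (observation, partner-action) pairs and outputs a posterior
# over K candidate conventions; names and dimensions are illustrative only.
import torch
import torch.nn as nn

class ConventionBelief(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden_dim=128, num_conventions=8):
        super().__init__()
        self.encoder = nn.GRU(obs_dim + act_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_conventions)

    def forward(self, obs_seq, act_seq):
        # obs_seq: (batch, T, obs_dim), act_seq: (batch, T, act_dim) one-hot
        x = torch.cat([obs_seq, act_seq], dim=-1)
        _, h = self.encoder(x)                    # h: (1, batch, hidden_dim)
        logits = self.head(h.squeeze(0))          # (batch, num_conventions)
        return torch.distributions.Categorical(logits=logits)

# Usage idea: update the belief online as the partner acts, then condition a
# best-response policy on the posterior (or its mode) at test time.
belief = ConventionBelief(obs_dim=16, act_dim=4)
posterior = belief(torch.randn(1, 10, 16), torch.randn(1, 10, 4))
print(posterior.probs)
```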
Consistency is a property of meta-learning algorithms that theoretically guarantees, under certain conditions, adaptation to any task at test time. An open question is whether, and how, this theoretical consistency translates into practice when compared with inconsistent algorithms. In this paper, we investigate this question empirically on a set of representative meta-RL algorithms. We find that theoretically consistent algorithms can indeed usually adapt to out-of-distribution (OOD) tasks, while inconsistent ones cannot, although the consistent ones can still fail in practice for other reasons, such as poor exploration. We further find that theoretically inconsistent algorithms can be made consistent by continuing to update all of their components on OOD tasks, and that they then adapt as well as or better than the originally consistent ones. We conclude that theoretical consistency is indeed a desirable property, and that inconsistent meta-RL algorithms can easily be made consistent and enjoy the same benefits.
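As a toy illustration of the "keep updating everything" recipe, the sketch below continually adapts all parameters of a small network on data from an out-of-distribution task; it uses supervised regression rather than RL purely to stay self-contained, and none of it reflects the specific meta-RL algorithms studied in the paper.

```python
# Toy illustration of the recipe above: at test time, keep taking gradient
# steps on *all* parameters using data from the OOD task, rather than
# adapting only a designated fast-adaptation head.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # stand-in for a meta-trained model
opt = torch.optim.Adam(model.parameters(), lr=1e-2)                   # every parameter group is updated

def ood_task(x):
    # An out-of-distribution target function never seen during "meta-training".
    return torch.sin(3 * x) + 1.0

for step in range(500):
    x = torch.rand(64, 1) * 2 - 1
    loss = nn.functional.mse_loss(model(x), ood_task(x))
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final OOD loss: {loss.item():.4f}")
```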
We consider the problem of communicating exogenous information by means of Markov decision process trajectories. This setting, which we call a Markov coding game (MCG), generalizes both source coding and a large class of referential games. MCGs also isolate a problem that is important in decentralized control settings where cheap talk is unavailable, namely that agents must balance communication against the cost associated with communicating. We contribute a theoretically grounded approach to MCGs based on maximum-entropy reinforcement learning and minimum-entropy coupling, which we call MEME. Thanks to recent breakthroughs in approximation algorithms for minimum-entropy coupling, MEME is not merely a theoretical algorithm but can be applied in practical settings. Empirically, we show that MEME outperforms a strong baseline on small MCGs and achieves strong performance on extremely large MCGs. On the latter point, we demonstrate that MEME can losslessly communicate binary images via trajectories of CartPole and Pong while simultaneously attaining maximal or near-maximal expected returns, and that it can even perform well in the presence of actuator noise.
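The minimum-entropy coupling subroutine can be approximated greedily; the sketch below implements one such greedy heuristic (repeatedly matching the largest remaining probability masses), which is a known approximation for this problem and not necessarily the exact routine used inside MEME.

```python
# Greedy approximate minimum-entropy coupling between two marginals p and q
# (both summing to 1). A known heuristic for the problem the paper builds on,
# shown for illustration only.
import numpy as np

def greedy_min_entropy_coupling(p, q, tol=1e-12):
    p, q = p.astype(float).copy(), q.astype(float).copy()
    coupling = np.zeros((len(p), len(q)))
    while p.sum() > tol:
        i, j = np.argmax(p), np.argmax(q)   # match the two largest remaining masses
        m = min(p[i], q[j])
        coupling[i, j] += m
        p[i] -= m
        q[j] -= m
    return coupling  # rows sum to p, columns sum to q, with low joint entropy

M = greedy_min_entropy_coupling(np.array([0.5, 0.3, 0.2]), np.array([0.6, 0.4]))
print(M, M.sum(axis=1), M.sum(axis=0))
```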
In this paper we present TruFor, a forensic framework that can be applied to a large variety of image manipulation methods, from classic cheapfakes to more recent manipulations based on deep learning. We rely on the extraction of both high-level and low-level traces through a transformer-based fusion architecture that combines the RGB image and a learned noise-sensitive fingerprint. The latter learns to embed the artifacts related to the camera internal and external processing by training only on real data in a self-supervised manner. Forgeries are detected as deviations from the expected regular pattern that characterizes each pristine image. Looking for anomalies makes the approach able to robustly detect a variety of local manipulations, ensuring generalization. In addition to a pixel-level localization map and a whole-image integrity score, our approach outputs a reliability map that highlights areas where localization predictions may be error-prone. This is particularly important in forensic applications in order to reduce false alarms and allow for large-scale analysis. Extensive experiments on several datasets show that our method is able to reliably detect and localize both cheapfake and deepfake manipulations, outperforming state-of-the-art works. Code will be publicly available at https://grip-unina.github.io/TruFor/
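For intuition, here is a schematic two-branch fusion model in the spirit of the description above: an RGB branch and a noise-fingerprint branch whose features are fused into a localization map and an integrity score. It is purely illustrative, far simpler than TruFor's transformer-based architecture, and it omits the reliability map.

```python
# Schematic (not TruFor's actual architecture): fuse features from the RGB
# image and a noise-sensitive fingerprint to predict a pixel-level anomaly
# map and a whole-image integrity score.
import torch
import torch.nn as nn

class FusionDetector(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        conv = lambda cin: nn.Sequential(nn.Conv2d(cin, feat, 3, padding=1), nn.ReLU())
        self.rgb_branch = conv(3)        # features from the RGB image
        self.noise_branch = conv(1)      # features from the noise fingerprint
        self.localize = nn.Conv2d(2 * feat, 1, 1)   # pixel-level localization map
        self.score = nn.Linear(2 * feat, 1)         # whole-image integrity score

    def forward(self, rgb, fingerprint):
        f = torch.cat([self.rgb_branch(rgb), self.noise_branch(fingerprint)], dim=1)
        loc_map = torch.sigmoid(self.localize(f))
        score = torch.sigmoid(self.score(f.mean(dim=(2, 3))))
        return loc_map, score

model = FusionDetector()
loc, s = model(torch.randn(1, 3, 256, 256), torch.randn(1, 1, 256, 256))
print(loc.shape, s.shape)
```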
Modelling the temperature of Electric Vehicle (EV) batteries is a fundamental task of EV manufacturing. Extreme temperatures in the battery packs can affect their longevity and power output. Although theoretical models exist for describing heat transfer in battery packs, they are computationally expensive to simulate. Furthermore, it is difficult to acquire data measurements from within the battery cell. In this work, we propose LiFe-net, a data-driven surrogate model that overcomes these limitations by using readily accessible driving diagnostics for battery temperature estimation. The model combines neural operators with a traditional numerical integration scheme to estimate the temperature evolution. Moreover, we propose two further variations of the baseline model: LiFe-net trained with a regulariser and LiFe-net trained with a time-stability loss. We compare these models in terms of generalization error on test data. The results show that LiFe-net trained with the time-stability loss outperforms the other two models and can estimate the temperature evolution on unseen data with a relative error of 2.77% on average.
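A minimal sketch of how a learned rate model can be combined with a classical integration step, as described above: a network predicts the temperature derivative from driving diagnostics and the current temperature, and an explicit Euler step advances the state. The names, dimensions, and the choice of Euler integration are assumptions, not the authors' implementation.

```python
# Illustrative only: learned dT/dt estimator rolled forward with an explicit
# Euler step over a sequence of driving diagnostics.
import torch
import torch.nn as nn

rate_model = nn.Sequential(nn.Linear(6, 64), nn.Tanh(), nn.Linear(64, 1))  # predicts dT/dt

def rollout_temperature(T0, diagnostics, dt=1.0):
    # diagnostics: (steps, 5) tensor of driving signals (speed, current, ...)
    T, trajectory = T0, [T0]
    for d in diagnostics:
        x = torch.cat([d, T])
        T = T + dt * rate_model(x)      # explicit Euler update
        trajectory.append(T)
    return torch.stack(trajectory)

traj = rollout_temperature(torch.tensor([25.0]), torch.randn(10, 5))
print(traj.shape)
```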
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
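As a usage note, BLOOM checkpoints can be loaded with the Hugging Face transformers library; the sketch below uses the small bigscience/bloom-560m variant so it runs on modest hardware, whereas the full 176B model requires multi-GPU or offloaded inference.

```python
# Load a small BLOOM checkpoint and generate a continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"   # small variant; the full model is "bigscience/bloom"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("Translate to French: I love open science.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```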
The class of bigraphical lasso algorithms (and, more broadly, 'tensor'-graphical lasso algorithms) has been used to estimate dependency structures within matrix and tensor data. However, all current methods to do so take prohibitively long on modestly sized datasets. We present a novel tensor-graphical lasso algorithm that analytically estimates the dependency structure, unlike its iterative predecessors. This provides a speedup of multiple orders of magnitude, allowing this class of algorithms to be used on large, real-world datasets.
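For context, bigraphical-lasso-style models assume a Kronecker-sum structure on the precision matrix of matrix-valued data; the snippet below only constructs that structure to show what is being estimated, and does not reproduce the paper's analytic estimator.

```python
# Kronecker-sum precision structure: Omega = Psi (+) Theta
#                                          = kron(Psi, I_q) + kron(I_p, Theta),
# where Psi models dependencies along one axis of the matrix data and Theta
# along the other. Shown for illustration; the estimator itself is not here.
import numpy as np

p, q = 4, 3
Psi = np.eye(p) + 0.3 * np.diag(np.ones(p - 1), 1) + 0.3 * np.diag(np.ones(p - 1), -1)
Theta = np.eye(q) + 0.2 * np.diag(np.ones(q - 1), 1) + 0.2 * np.diag(np.ones(q - 1), -1)

Omega = np.kron(Psi, np.eye(q)) + np.kron(np.eye(p), Theta)  # Kronecker sum
print(Omega.shape)                              # (p*q, p*q) precision over vec(X)
print(np.all(np.linalg.eigvalsh(Omega) > 0))    # positive definite for these choices
```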
Thanks to recent advances in deep learning, sophisticated generative tools now exist that produce extremely realistic synthetic speech. However, such tools can be used maliciously and may pose a serious threat to our society. Synthetic speech detection has therefore become a pressing research topic, and a wide variety of detection methods have recently been proposed. Unfortunately, they hardly generalize to synthetic audio produced by tools never seen during training, which makes them unfit for real-world scenarios. In this work, we aim to overcome this problem by proposing a new detection approach that leverages only the biometric characteristics of the speaker, with no reference to specific manipulations. Since the detector is trained only on real data, generalization is ensured automatically. The proposed approach can be implemented on top of off-the-shelf speaker verification tools. We test several such solutions on three popular test sets, obtaining good performance, high generalization ability, and strong robustness.
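A minimal sketch of the detection recipe described above: compare the speaker embedding of a test utterance against embeddings of enrolled genuine recordings of the claimed speaker and flag low similarity as synthetic. The embed function below is a stand-in for any off-the-shelf speaker-verification embedder and is purely a placeholder.

```python
# Placeholder sketch: biometrics-based synthetic speech detection via
# similarity to enrolled real recordings of the claimed speaker.
import numpy as np

def embed(waveform: np.ndarray) -> np.ndarray:
    # Placeholder: in practice, call a pretrained speaker-verification model here.
    rng = np.random.default_rng(abs(hash(waveform.tobytes())) % (2**32))
    return rng.standard_normal(192)

def is_synthetic(test_wav, enrolled_wavs, threshold=0.5):
    ref = np.mean([embed(w) for w in enrolled_wavs], axis=0)   # enrolled voice profile
    e = embed(test_wav)
    cos = float(ref @ e / (np.linalg.norm(ref) * np.linalg.norm(e) + 1e-9))
    return cos < threshold      # low similarity to the real voice -> flag as fake

print(is_synthetic(np.zeros(16000), [np.ones(16000), np.full(16000, 0.5)]))
```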
We present the joint contribution of IST and Unbabel to the WMT 2022 Shared Task on Quality Estimation (QE). Our team participated in all three subtasks: (i) sentence- and word-level quality prediction; (ii) explainable QE; and (iii) critical error detection. For all tasks, we build on top of the COMET framework, connecting it with the predictor-estimator architecture of OpenKiwi and equipping it with a word-level sequence tagger and an explanation extractor. Our results suggest that incorporating references during pretraining improves performance on downstream tasks for several language pairs, and that jointly training with sentence- and word-level objectives yields further boosts. Furthermore, combining attention and gradient information proved to be the top strategy for extracting good explanations from sentence-level QE models. Overall, our submissions achieved the best results for all three tasks across almost all language pairs.
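To illustrate the attention-gradient idea for explanation extraction, the toy model below scores tokens by combining attention weights with the gradient of a sentence-level quality score with respect to those weights; the architecture and scoring rule are illustrative assumptions, not the submission's actual models.

```python
# Toy "attention x gradient" relevance extraction for a sentence-level scorer.
import torch
import torch.nn as nn

class TinyQE(nn.Module):
    def __init__(self, vocab=1000, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.score = nn.Linear(dim, 1)

    def forward(self, tokens):
        x = self.emb(tokens)                                          # (B, T, D)
        attn = torch.softmax(self.q(x) @ self.k(x).transpose(1, 2)
                             / x.shape[-1] ** 0.5, dim=-1)            # (B, T, T)
        attn.retain_grad()                                            # keep grads for the explanation
        pooled = (attn @ self.v(x)).mean(dim=1)                       # (B, D)
        return self.score(pooled).squeeze(-1), attn

model = TinyQE()
tokens = torch.randint(0, 1000, (1, 7))
quality, attn = model(tokens)
quality.sum().backward()
relevance = (attn * attn.grad).sum(dim=1).squeeze(0)   # per-token importance scores
print(relevance)
```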
In this paper, we present an end-to-end emotion-aware conversational agent based on two models: a reply-emotion prediction model, which uses the context of the dialogue to predict an appropriate emotion for the agent to express in its reply, and a text generation model, conditioned on the predicted emotion and the dialogue context, that produces a reply fitting both the context and the emotion. Furthermore, we propose to use an emotion classification model to assess the emotion expressed by the agent during model development, which allows us to evaluate the agent automatically. Both automatic and human evaluation results show that explicitly guiding the text generation model with a predefined set of sentences leads to clear improvements, both in the expressed emotion and in the quality of the generated text.
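A hypothetical end-to-end sketch of the two-model pipeline plus the classifier-based automatic evaluation described above; all three components are placeholders standing in for trained neural models.

```python
# Placeholder pipeline: predict a reply emotion, generate a conditioned reply,
# then check the expressed emotion with a classifier for automatic evaluation.
import random

EMOTIONS = ["joy", "sadness", "anger", "surprise", "neutral"]

def predict_reply_emotion(context):
    # Placeholder for the reply-emotion prediction model.
    return random.choice(EMOTIONS)

def generate_reply(context, emotion):
    # Placeholder for the emotion-conditioned text generation model.
    return f"[{emotion}] I hear you about '{context[-1]}'."

def classify_emotion(text):
    # Placeholder for the emotion classifier used for automatic evaluation.
    return text.split("]")[0].strip("[")

context = ["I finally got the job offer today!"]
target = predict_reply_emotion(context)
reply = generate_reply(context, target)
print(reply, "| expressed emotion matches target:", classify_emotion(reply) == target)
```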